Patent abstract:
A computer-implemented method for managing a graphical human-machine interface, comprising the steps of receiving information relating to the position of the eyes and the direction of a user's gaze on the interface; receiving physiological information from the user; determining a level of cognitive load based on the physiological information received; and adjusting the display of the interface according to the direction of gaze and/or the level of cognitive load determined. Developments describe the management of the display areas (foveal area and peripheral areas), the selection of one or more displays, the management of the distance from the display of messages to the current location of the gaze, the management of the criticality of the messages, various graphic modalities to attract attention, the management of the flight context in the avionic case, the management of the visual density, etc. System aspects are described (virtual and/or augmented reality).
Publication number: FR3077900A1
Application number: FR1800125
Filing date: 2018-02-12
Publication date: 2019-08-16
Inventors: Stephanie LAFON; Alexiane Bailly; Sebastien Dotte
Applicant: Thales SA
Main IPC class:
Patent description:

PERIPHERAL VISION IN A MAN-MACHINE INTERFACE
Field of the invention
The invention relates to the technical field of display methods and systems in general, and more specifically that of multimodal interfaces within an aircraft cockpit.
State of the art
In the cockpit of a contemporary aircraft, the pilot receives and manipulates a large amount of information. In certain circumstances (e.g. flight context or type of mission), the pilot may be overloaded with information to the point of not responding to requests communicated by the avionics systems. In some cases, the pilot may not even perceive the stimuli. Indeed, a pilot overloaded or distracted by other tasks may not detect important information that the system would present to him. In aeronautical history, it has happened that a pilot, under the effect of stress, did not perceive an audio alert. The consequences can turn out to be dramatic.
In addition, human-machine interfaces are becoming increasingly complex. In particular for the purposes of aeronautical safety and security, there is therefore an ever-increasing need for improved management of man-machine interfaces (in the broad sense, that is to say multimodal interfaces, involving different sensory capacities).
Existing avionics systems notify pilots of system failures through audible alerts or specific voice messages. Additional light indicators are sometimes used, for redundancy. When a datalink-type message is received, for example from air traffic control, an audio signal can be emitted in the cockpit and specialized human-machine interfaces can signal the arrival of messages. The pilot can then react, in particular by tactile interaction with the interface equipment located in the cockpit (e.g. sending messages using a physical keyboard). These existing solutions have faults or inadequacies.
The published literature on human-machine interfaces is rich, but aeronautical specificities remain only partially addressed, at least as regards recent developments.
Patent document US2016272340 entitled "Aircraft-vision Systems and methods for maintaining situational awareness and spatial orientation" discloses display systems and methods for maintaining situational awareness and pilot spatial orientation, particularly under difficult conditions. An image with visual cues is projected in a peripheral zone of the visual field so that these cues stimulate the peripheral vision of the pilot. This approach has limitations.
There is a need for systems and methods for managing display at the edge of the field of vision.
Summary of the invention
The invention relates to a computer-implemented method for managing a graphical human-machine interface, comprising the steps of receiving information relating to the position of the eyes and the direction of gaze of a user on the interface; receiving physiological information from the user; determining a level of cognitive load based on the physiological information received; and adjusting the interface display according to the direction of gaze and/or the level of cognitive load determined. Developments describe the management of display areas (foveal area and peripheral areas), the selection of one or more displays, the management of the distance from the display of messages to the current location of the gaze, the management of the criticality of the messages, various graphic methods to attract attention, the management of the flight context in the avionics case, the management of visual density, etc. System aspects are described (virtual and/or augmented reality).
In general, the examples provided facilitate man-machine interactions and in particular relieve the pilot of tedious, sometimes repetitive and often complex manipulations, thereby improving his ability to concentrate on piloting proper. These manipulations and operations frequently take place in emergency contexts or in contexts requiring rapid reaction. The cognitive effort to be devoted to piloting or driving is optimized, or more exactly reallocated to cognitive tasks more useful with regard to the piloting objective. In other words, the technical effects linked to certain aspects of the invention correspond to a reduction in the cognitive load of the user of the human-machine interface.
The improvements according to the invention are advantageous in that they reduce the risks of human failure in the context of human-machine interaction (e.g. the risk of deleting or failing to take into account critical data, etc.). The embodiments of the invention therefore contribute to improving aeronautical safety and security (and, more generally, the safety of vehicle piloting).
Certain embodiments make it possible to alert or inform the pilot visually about a particular event as a function of the cognitive load which he is undergoing (e.g. without disturbing him in his current task). In contrast, existing solutions are fixed or static in the sense that they do not take into account the cognitive load of the pilot (an audible alert may not be perceived during a heavy workload; e.g. a message or a light indicator may fall outside the pilot's visual field because peripheral vision may be reduced).
In addition, certain embodiments make it possible to alert or inform the pilot visually about a particular event, regardless of where he is looking. The embodiments of the invention therefore optimize the solicitation and the use of the pilot's field of vision.
In certain advantageous embodiments, the man-machine interface according to the invention is partially or completely controllable with the gaze. For example, the pilot can interact with the interface without his hands, with his gaze alone. For example, he can answer (confirm or deny) a question or request from the interface. He can also initiate an action. This interaction modality is safer in that it allows the pilot to keep his hands on the controls of the aircraft. In contrast, the approaches known in the state of the art require at least tactile interventions, while a pilot may sometimes not be able to use his hands (when he is flying his aircraft manually, for example).
By extending display methods to augmented and/or virtual reality environments, the invention opens up new possibilities for interaction. In contrast, the human-machine interfaces according to the state of the art in avionics are generally designed in relation to display formats that are limited, or even cramped. New possibilities for augmented or virtual interaction make it possible to redefine the human-machine interfaces themselves. For example, the user's field of vision can be used better and more intensively. A real interactive dialogue between the machine and the man can take place, for example to maintain a high level of attention, or to exploit it as well as possible. In this case, the embodiments of the invention make it possible to increase the display surface used or usable at the periphery of the field of vision, by optimizing the use of this visual area.
In an advantageous embodiment of the invention, the best place to display information is selected in real time, in particular according to the pilot's gaze.
In an advantageous embodiment of the invention, a mobile icon or symbol is displayed in the peripheral field of the pilot, at a distance dependent on his cognitive load.
In an advantageous embodiment of the invention, the interface is controllable from the gaze.
Description of the figures
Various aspects and advantages of the invention will become apparent from the description of a preferred but non-limiting embodiment of the invention, with reference to the figures below:
FIG. 1 illustrates an example of a human-machine interface in the particular context of avionics, an interface which is manipulated by the method according to the invention;
Figure 2 illustrates a specific display system;
Figure 3 shows examples of process steps according to the invention;
Figure 4 details a particular embodiment;
FIG. 5 illustrates an example of management of a message according to a particular embodiment of the invention.
Detailed description
According to the embodiments of the invention, an aircraft can be a commercial, military or freight aircraft, an autonomous or remote-controlled drone, a helicopter, or any other means of transport which can use a man-machine interface. The invention is not limited to aeronautical applications: the embodiments can be applicable to other types of vehicle (e.g. car, truck, bus, train, boat, etc.).
The term "display device" manipulated by the invention designates any display system interposed or interposed between the visual system (ie the eyes) of a user (eg surgeon, pilot, computer scientist, etc.) and his external environment (which therefore provides a "visual background").
The "visual field" literally designates the portion of space seen by an eye. An eye perceives in this space area lights, colors, shapes, textures, etc. By extension, both eyes perceive a portion of space in stereoscopy. The vision can be of different types: monocular or binocular. This visual field has several areas. The maximum acuity corresponds to the foveal zone; other areas allow reading, symbol recognition, and color discrimination. The driver's field of vision changes with eye movement: for example, the field of vision narrows with speed (in fact, the eyes become less mobile).
In the remainder of the document, the expression “location of the gaze” designates the center of the foveal vision (center of the area perceived by the cone of vision).
“Peripheral” vision designates a characteristic of human vision. Regarding foveal vision, the eye stops for 200 to 400 milliseconds on a fixation point in order to obtain high resolution details. Foveal vision provides a detailed but slow analysis (3 to 4 "images" per second). In contrast, peripheral vision provides more general (even distorted) impressions of the field of vision but very fast (up to 100 "images" per second). Peripheral vision therefore allows the ultra-rapid perception of movements. This ability to detect movements increases even towards the extreme periphery. It is estimated that peripheral vision covers more than 99% of the field of vision and is associated with half of the optic nerve and the visual cortex.
Visual acuity is maximum in the foveal area (5° FOV, "Field of View", a solid angle depending on the direction of gaze). Reading is generally done in an interval exceeding the foveal zone (10° FOV). Symbol recognition is performed in an interval of 20° FOV and color discrimination is generally performed in an interval of 30° FOV.
Vision is said to be macular up to 18° FOV; beyond that it is said to be peripheral. This peripheral vision can be broken down into "near", "mid" or "far" periphery, according to different thresholds (generally 60° and 120° FOV). The term "peripheral vision" in this document refers to these different areas.
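As an illustration only, the angular thresholds mentioned above can be turned into a simple classification routine. The sketch below is not part of the patented method; the zone names, the conversion of the full FOV cone angles into half-angles measured from the gaze center, and the threshold values are assumptions made for illustration.

```python
# Hypothetical sketch: classify the angular eccentricity of a point (measured from the
# center of foveal vision) into the vision zones discussed above. FOV values in the text
# are full cone angles, so the thresholds here are half-angles; all values are assumptions.
def vision_zone(eccentricity_deg: float) -> str:
    if eccentricity_deg <= 2.5:     # ~5 deg FOV: foveal vision (maximum acuity)
        return "foveal"
    if eccentricity_deg <= 9.0:     # up to ~18 deg FOV: macular vision
        return "macular"
    if eccentricity_deg <= 30.0:    # up to ~60 deg FOV: near periphery
        return "near_peripheral"
    if eccentricity_deg <= 60.0:    # up to ~120 deg FOV: mid periphery
        return "mid_peripheral"
    return "far_peripheral"         # beyond ~120 deg FOV
```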
The periphery of the field of vision is relative to the position of the user's head. Since the head can move and change orientation (e.g. to the sides, downwards, upwards), it is implicit in the rest of the document that tracking of the pilot's head ("head-tracking") can be carried out (if the geometric particularities of vision require it).
The foveal zone and the peripheral vision zones are part of the (subjective) visual field, which includes several independent or dependent components: the visual background (distant environment, e.g. mountains, clouds, road, etc.), displays at fixed positions in the vehicle (but which can move relative to the position of the user's head) and the displays worn by the user (opaque, transparent or semi-transparent), which are generally secured to the user's head.
In the embodiments in which the man-machine interface is at least partially worn (e.g. a virtual reality (VR) or augmented reality (AR) headset, transparent or semi-transparent), the foveal zone and the peripheral zones move with the direction of the gaze. In this type of VR/AR display, the visual background (i.e. the different planes or depths of display) can be modified or adjusted (by comparison with a natural exterior that remains invariant on the horizon when VR or AR is not used). In other words, in this type of VR/AR environment, there are one or more intermediate, artificial display layers or depths (configurable and adjustable), sandwiched between reality and the visual system, which can be manipulated so as to manage this foveal zone and these peripheral zones. For example, the graphic rendering (i.e. photometric precision and quality of images, number of polygons, latency and graphic refresh rate, etc.) can be particularly optimized for the foveal zone, while the peripheral zones can require less computing power. In other scenarios, the trade-offs may be different, if not the reverse.
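The rendering trade-off described in this paragraph can be sketched as a simple tile-based budget allocation. This is only a minimal illustration assuming a VR/AR renderer that accepts per-tile quality settings; the thresholds, quality levels and field names are assumptions, not values specified by the patent.

```python
# Hypothetical foveated-rendering sketch: tiles near the gaze direction receive full
# quality, peripheral tiles receive cheaper settings. Values are illustrative assumptions.
def tile_quality(angle_from_gaze_deg: float) -> dict:
    if angle_from_gaze_deg <= 10.0:      # foveal / reading area: best rendering
        return {"resolution_scale": 1.0, "polygon_lod": "high"}
    if angle_from_gaze_deg <= 30.0:      # symbol and color recognition area
        return {"resolution_scale": 0.5, "polygon_lod": "medium"}
    return {"resolution_scale": 0.25, "polygon_lod": "low"}   # periphery
```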
In some embodiments, the external visual environment can be perceived by transparency (in addition to a display worn by the user, elements of the cockpit dashboard can be perceived). In other words, virtual elements as much as "real" elements can be perceived.
FIG. 1 illustrates an example of a human-machine interface in the particular context of avionics, an interface which is manipulated by the method according to the invention.
The cockpit 100 of an aircraft (or of a remote control cabin, or of a vehicle in the broad sense) may comprise a dashboard 110 comprising one or more displays. A human-machine interface according to the invention can comprise several display systems, for example arranged on a dashboard or in the cockpit and/or carried by the pilot. A display system or human-machine interface according to the invention may comprise one or more displays ("screens" or "displays") chosen from a head-up display 120, a head-down display 130, a portable or transportable screen 140, a virtual and/or augmented reality display 150, and one or more projectors 160. The man-machine interface can also include a camera or image acquisition device 170, one or more gaze-tracking devices 180, and input interfaces or peripherals 190.
A head-up display 120 can be of the HUD ("Head Up Display") type. A head-up display allows the driver of a vehicle to monitor his environment along with the information provided by his on-board instruments. This type of device superimposes specific information (e.g. piloting, navigation, mission) in the pilot's field of vision. Various devices are known in the prior art, in particular the use of projectors, semi-transparent mirrors, and transparent (e.g. augmented reality) or opaque (e.g. virtual reality) projection surfaces, or even surfaces whose opacity is configurable. In some embodiments, a head-up display is of the binocular type (i.e. engages both eyes). In other cases, the display is of the monocular type (i.e. mobilizing only one eye). In such monocular head-mounted systems, information is superimposed on the environment. A HUD system can therefore be of the monocular type (e.g. Google Glass connected glasses), but also of the binocular type (e.g. a "Head-Mounted Display" or HMD system which delivers two different images, one for each eye). A display system in some cases can also be "bi-ocular" (presenting the same image to both eyes). A windshield on which driving information is projected provides different images for the right eye and the left eye and therefore falls into the category of binocular display systems.
A head-down display 130 is a display disposed below a main viewing area. A head-down display used by the invention can for example be a PFD main flight screen (and / or an ND / VD navigation screen and / or a MFD multifunction screen). More generally, the screens can be avionics screens of the flight management type or “Flight Management System”. In the case of a land vehicle, the head-down display can for example designate an integrated GPS screen.
A portable (or transportable) screen 140 may include avionics screens and/or non-avionics means of the "Electronic Flight Bag" type and/or augmented and/or virtual reality means.
According to the embodiments, the displays can designate or include displays in virtual and / or augmented reality 150. Such display systems can in fact be worn by the pilot or driver. The display can therefore be individual and include an opaque virtual reality headset or a semi-transparent augmented reality headset (or a headset with configurable transparency). The helmet can therefore be a wearable computer, glasses, a head-mounted display, etc. Means of virtual or augmented reality can designate or include avionic systems of the EVS (Enhanced Vision System) or SVS (Synthetic Vision System) type.
The display in the cockpit or vehicle cabin (the man-machine interface according to the invention) can also include one or more projectors 160. A projector can for example be a pico-projector or a video projector (for example a laser projector). The information displayed to the pilot can indeed be entirely virtual (displayed in the individual headset), or entirely real (for example projected on the flat surfaces available in the real environment of the cockpit), or even a combination of the two (i.e. partly a virtual display superimposed or merged with reality and partly a real display via projectors). The use of projectors makes it possible in particular to reconfigure the pilot's immersion space (the curved surfaces of the dashboard can be taken into account so as to create on demand a global merged display, by distorting the projected images). The distribution of the information projection and/or of masking can be configurable, in particular as a function of the immersive visual context of the user. This distribution can lead to an opportunistic consideration of the environment, by considering all the available surfaces so as to add (superimpose) virtual information, chosen appropriately in nature (what to display), temporality (when to display, at what frequency) and location (priority of displays; stability of locations versus nature, so as not to disorient the user, a certain consistency being desirable). At one extreme, all the places little or weakly used in the user's environment can be exploited in order to densify the display of information. The merged virtual/real display can therefore become very fragmented (depending on the complexity of the user's environment, whether a bare room or a room provided with numerous pieces of equipment, at least a part of which can be hidden or masked by the display of accepted or selected information). Reality is therefore transformed into as many potential screens.
All the places little or weakly used in the environment of the user can be exploited in order to densify the display of information. Furthermore, by projecting image masks superimposed on real objects, the display can "erase" one or more control instruments physically present in the cockpit (joysticks, buttons, actuators) whose geometry is known and stable, so as to further increase the addressable surfaces. The real cockpit environment can therefore be transformed into as many potential screens, or even into a single unified screen (reconfiguration of reality).
Optionally, the display in the cockpit or vehicle driver's cabin may include one or more image acquisition cameras 170. A camera may be a fish-eye camera, a stereoscopic camera, or the like. This feedback allows many advantageous developments of the invention. A camera or a video camera placed in the cockpit can make it possible to capture at least part of all the visual information displayed for the pilot (advantageously, this video feedback device can be placed on a head-up visor, smart glasses or any other equipment worn by the pilot, so as to capture the subjective view of the pilot). By image analysis (carried out at regular fixed intervals, or continuously in the case of video capture), the subjective view of the pilot can be analyzed and modified or corrected, according to predefined criteria and/or predefined objectives.
For example, in one embodiment, the visual density of the information that is displayed can be estimated. For example, this density can be estimated according to different sub-parts of images, and display adjustments can be determined dynamically. For example, in the event that a display screen becomes too "cluttered" (quantity of text or graphic symbols in excess compared to one or more predefined thresholds), the information with the lowest priority may be "reduced", "condensed" or "synthesized" in the form of markers or symbols.
Conversely, if the density of information displayed allows it, reduced or condensed or synthesized information can be developed or detailed or extended or enlarged. This management of the presentation of information can be a function of different parameters (discussed below).
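One possible reading of this density-based adjustment is sketched below. It assumes a grayscale capture of a screen region and a per-item priority and footprint estimate; the names (region_density, adjust_region, priority, footprint) and the thresholds are illustrative assumptions, not elements defined by the patent.

```python
import numpy as np

# Hypothetical sketch: estimate the visual density of a screen region from a captured
# grayscale image, then condense the lowest-priority items while the region is over budget.
def region_density(gray_region: np.ndarray, active_threshold: int = 128) -> float:
    """Fraction of 'lit' pixels in the region (one possible density metric)."""
    return float((gray_region > active_threshold).mean())

def adjust_region(items: list, gray_region: np.ndarray, max_density: float = 0.35) -> list:
    density = region_density(gray_region)
    if density <= max_density:
        for item in items:
            item["rendering"] = "detailed"        # room available: expand condensed items
        return items
    for item in sorted(items, key=lambda it: it["priority"]):
        if density <= max_density:
            break
        item["rendering"] = "icon"                # condense least important items first
        density -= item.get("footprint", 0.02)    # assumed per-item density contribution
    return items
```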
The display in the cockpit or vehicle driver's cabin includes one or more gaze tracking devices 180 ("eye tracking").
The "eye tracking" used by the method according to the invention determines over time the position of the eyes (origin of the vector) and the direction of the gaze (the vector). For simplification, the determination of the position of the eyes is implicit.
Eye tracking techniques ("eye-tracking" or "gaze-tracking") can indeed track eye movements. In one embodiment, an eye tracker analyzes images of the human eye recorded by a camera, often in infrared light, to calculate the direction of the pilot's gaze. In one embodiment, the variations of electrical potentials on the surface of the facial skin are analyzed. In one embodiment, the disturbances induced by a lens on a magnetic field are analyzed. By eye tracking, it can be determined where the pilot's eye is looking, and so it can be determined what he sees (and what he does not see).
Different eye tracking techniques can be used, or even combined.
In one embodiment, the gaze is monitored using an electro-oculographic technique (Marg, 1951), by measuring differences in bioelectric potential, resulting from the retino-corneal bioelectric field modulated by the rotations of the eye in its orbit (electro-oculograms performed with surface electrodes).
In one embodiment, the gaze is monitored using a so-called limbus-tracking technique (Torok et al. 1951). By illuminating the limbus of the eye (the boundary between the sclera and the iris), the quantity of reflected light depends on the relative surface of the sclera and the iris in the measurement field, and therefore makes it possible to identify the eye position.
In one embodiment, the gaze is monitored using techniques based on the principles stated by Hirschberg in 1885. It is indeed possible to determine the orientation of the gaze by locating the position of the reflection of a light source on the cornea of the eye (corneal reflection) in relation to the pupil: a camera can detect the movement of the eye exploring an image. A quantitative analysis of eye movement can then be carried out (number and duration of gaze fixations, number and amplitude of saccades, etc.). This method in particular allows absolute measurements of the different positions of the eye, independently of the movements of the user's head.
In one embodiment, the gaze is monitored by measuring the reflections of light on the various structures of the eye (Purkinje images: the inner and outer faces of the cornea and the anterior and posterior faces of the lens). One or more cameras focused on one or both eyes can record and analyze their movements. Some variants are head-mounted ("Head-mounted Systems"); other variants are non-intrusive systems. These variants can use ambient light or infrared light (e.g. with one or more diodes). The acquisition frequency varies between 30 Hz and several thousand Hz. Some non-intrusive systems are calibrated.
In one embodiment, the gaze is monitored by tracking a 3D model of the face. According to this variant, images are acquired in order to detect the face, the pupils, the mouth and the nostrils. Subsequently, a 3D model is used to assess the orientation of the face and finally the orientation of the gaze is estimated using the images of the eyes.
In one embodiment, eye tracking is carried out using the so-called "glint" approach. Using the PCCR ("Pupil Center / Corneal Reflection") technique, the angle of the visual axis and the location of the gaze are determined by following the relative position of the pupil center and the corneal reflection of the light source.
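For illustration, a common way to realize a PCCR-style estimator is to calibrate a low-order polynomial mapping from the pupil-minus-glint vector to display coordinates. The sketch below only shows this generic principle under stated assumptions; the calibration procedure, feature choice and coordinate conventions are not the patent's specification.

```python
import numpy as np

# Hypothetical PCCR-style sketch: a calibration phase pairs pupil-glint vectors with known
# screen points; a 2nd-order polynomial is then fitted and used to estimate the gaze point.
def fit_calibration(pg_vectors: np.ndarray, screen_points: np.ndarray) -> np.ndarray:
    """pg_vectors: (N, 2) pupil-center minus corneal-reflection offsets; screen_points: (N, 2)."""
    x, y = pg_vectors[:, 0], pg_vectors[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(design, screen_points, rcond=None)
    return coeffs                                   # (6, 2) coefficient matrix

def gaze_point(pg_vector, coeffs) -> np.ndarray:
    x, y = pg_vector
    features = np.array([1.0, x, y, x * y, x**2, y**2])
    return features @ coeffs                        # estimated (u, v) on the display
```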
In one embodiment, the gaze is monitored using a Tobii-type device (remote eye tracking with a near-infrared light source, followed by image processing according to a physiological 3D model of the eye to estimate the position of the eyes and the direction of gaze).
The human-machine interface according to the invention may also include interfaces or input peripherals 190. In one development, the device comprises means for selecting one or more portions of the virtual display. The pointing of the human-machine interfaces (HMI), or of portions of these interfaces or of information, can be accessible via various devices, for example a "mouse"-type pointing device or a designation based on manual pointing; via acquisition interfaces (button, wheel, joystick, keyboard, remote control, motion sensors, microphone, etc.); or via combined interfaces (touch screen, force feedback control, data gloves, etc.). The input or selection human-machine interfaces can in fact include one or more selection interfaces (menus, pointers, etc.), graphical interfaces, voice interfaces, and gesture and position interfaces. In an advantageous embodiment, a selection can be made by gaze (e.g. a fixation exceeding a predefined duration threshold, a blink of the eye, a concomitant voice command, contraction of a muscle, a foot command, etc.). In one embodiment, a selection can be made by one or more head movements.
In one embodiment, the system knows at all times the direction of the pilot's gaze and the position of his eyes, which allows it to choose the appropriate display for displaying messages.
The selected display can be of various kinds, and a plurality of spaces or surfaces (e.g. planes, curved surfaces, etc.) can be used. A display can be a head-down display, a HUD, a helmet visor or a windshield. A display can also result from a projection. In certain embodiments, the projection spaces are selected in an "opportunistic" manner (for example, the unused spaces of the dashboard are used, e.g. the uprights or the interstitial spaces between the screens). In one embodiment, one or more spaces can be predefined for projections (they can be intentionally dedicated to this task). For example, a free area in the cockpit can allow a projector to display information. Generally, nothing limits this freedom of projection, which can be carried out on any type of support (any material, e.g. plastics, fabrics, glass, etc., including the human body), since projection systems can adapt their display so as to conform to the environment and produce stable, well-formed images, knowing the target subjective point of view.
Figure 2 illustrates a head-up (HUD) display system 200.
The optional display system represented displays information (for example piloting information) superimposed on the external landscape (visible by transparency or as captured and retransmitted in video). This display system 200 may include a transparent display 210 on which an image can be projected or created, making it possible to see this image superimposed on an "outdoor" scene. A monocular system has a single display; a binocular system has two displays. In variant embodiments, the transparency of a display is variable (or configurable). A display is attached or associated with the pilot's head so that the display is kept close to the eye ("Near-To-Eye display"). In the example illustrated, the display is attached to a helmet worn by the pilot. Other means are possible (e.g. worn glasses, a fixed support which the operator approaches, etc.). In certain embodiments, a HUD display can be a device fixed to the aircraft, offering a fixed field of view (in general a field of view of 40° laterally by 26° vertically). In other embodiments, the head-up display is attached to a helmet or a vision device worn by the pilot. In one embodiment, one or more projectors display information on the windshield of the vehicle (e.g. plane or car), or even on free areas of the cockpit (for example).
This display system 200 can be associated with a system 230 making it possible to detect the direction in which the head and/or gaze of the operator is directed (a gaze-tracking, or "eye-tracking", system is used). The types of measurement are diverse (optical, electrical, mechanical, etc.). The system 230 can be coupled to the display 200, but it can also be placed elsewhere in the cockpit in the absence of a helmet worn by the pilot (it can face the pilot, to measure for example the dilation and the direction of the pupils).
The display system 200 illustrated in the figure may include or be associated with a computer, i.e. computing resources 240 (e.g. calculation, memory, graphics, etc.). The computer 240 can drive (control, slave) the projector 220. The computer can use the information relating to the monitoring 230 of the direction of the head and/or of the gaze. It can integrate and manipulate various information relating to the aircraft and/or the mission. It determines at all times (continuously, depending on the desired video refresh rate) the operational image to be displayed on the display.
The display system 200 can be associated with one or more human-machine interfaces (HMI), e.g. input devices (mouse, keyboard, touch screen, force touch, haptic means, trackpad, trackball, etc.), allowing the pilot to make selections among several proposed data items or to enter data. Depending on the embodiments, the HMI can include different peripherals or combinations of peripherals. In some cases, voice commands can be used. In other cases, neural interfaces can be used. Wink selection interfaces can be used.
FIG. 3 illustrates examples of steps of an embodiment.
In one embodiment, a computer-implemented method for managing a human-machine interface comprising a plurality of displays is described, the method comprising the steps of: receiving information relating to the position of the eyes and the direction of a user's gaze on the man-machine interface; receiving physiological information from the user; determining a level of cognitive load among a plurality of predefined levels as a function of the physiological information received; and adjusting the display of the man-machine interface according to the direction of gaze and the level of cognitive load determined.
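A minimal sketch of the loop formed by these four steps is given below. The object names (eye_tracker, physio_sensors, displays) and the trivial load estimator are placeholders assumed for illustration; they do not correspond to an API defined by the patent.

```python
# Hypothetical sketch of the claimed loop: acquire gaze and physiological data, derive a
# discretized cognitive load level, then adjust each display. All names are placeholders.
def estimate_cognitive_load(physio: dict) -> str:
    """Toy estimator: weighted score of two normalized measures mapped to a level."""
    score = 0.6 * physio.get("heart_rate_norm", 0.0) + 0.4 * physio.get("gsr_norm", 0.0)
    return "high" if score > 0.66 else "medium" if score > 0.33 else "low"

def hmi_management_loop(eye_tracker, physio_sensors, displays):
    while True:
        eye_position, gaze_direction = eye_tracker.read()    # step 1: gaze information
        physio = physio_sensors.read()                       # step 2: physiological information
        load_level = estimate_cognitive_load(physio)         # step 3: discretized load level
        for display in displays:
            display.adjust(gaze_direction, load_level)       # step 4: adapt the interface
```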
Monitoring the overall state of the pilot can lead to adapting or reconfiguring the display in a certain way (for example by densifying or reducing the information on the screens, etc.). The overall state of the pilot may include various factors such as a) cognitive load, b) the level of stress correlated with the flight phase or with other external physical events or parameters such as sound level, and c) the pilot's physiological or biological parameters, e.g. heart rate and sweating, from which stress level estimates are inferred. The weighting or hierarchy between these different factors can be static or evolving/dynamic, configurable or preconfigured.
In one embodiment, the method further comprises a step consisting of displaying a graphic message at the periphery of the man-machine interface, at a configurable distance from the location of the man-machine interface on which the user's gaze is fixed.
In one embodiment, the graphic message can be displayed on a plurality of displays to follow the pilot's gaze.
In one embodiment, the graphic message can remain on a single screen. In other embodiments, the graphic message can "switch" from display to display, so as to ensure visual continuity. As in the other examples provided, the message may follow the gaze across the various displays and "freeze" when the pilot wants to process it. It can then open an interactive interface to the gaze. The message, once observed or perceived, can return to its default location (to which the pilot is accustomed) to be processed later by the latter (possibly).
In one embodiment, the distance is a function of the cognitive load and/or of the priority associated with the message. The term "priority" can be replaced by "criticality". The higher the cognitive load and/or the more critical the message, the closer the message gets to the center of the foveal zone. This distance can also be reduced over time as the message becomes more critical.
In one embodiment, the distance decreases over time.
In some cases (critical messages), it may be intentional to solicit the pilot's attention until the pilot confirms that he has consciously acknowledged the information.
In an advantageous embodiment of the invention, the graphic symbol indeed follows the gaze of the pilot (with a change of display if necessary) as long as the latter has not perceived it (and therefore has not explicitly acknowledged it).
In one embodiment, the graphic message is displayed according to graphic modalities comprising displacements in translation and / or rotations, such as vibrations, these displacements and / or rotations being a function of the content of the message.
The “content” of the message can indicate its priority and / or its criticality.
Vibrations (i.e. movements of small amplitude) in peripheral vision are particularly advantageous for the reasons mentioned above (properties of peripheral vision).
In one embodiment, the step of adjusting the display includes the step of selecting one or more displays from among the displays constituting the human-machine interface.
In one embodiment, the method determines the preferred display to be used according to the direction of gaze in order to best capture the pilot's attention. The display closest to the foveal center can be selected.
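A possible way to realize this selection is to describe each display by the angular position of its center in the pilot's field of view and to pick the one with the smallest angular distance to the gaze direction. The sketch below assumes such a description; the field names and example angles are illustrative only.

```python
import math

# Hypothetical sketch: select the display whose center is angularly closest to the gaze.
def select_display(displays: list, gaze_azimuth_deg: float, gaze_elevation_deg: float) -> dict:
    def angular_distance(d: dict) -> float:
        return math.hypot(d["azimuth_deg"] - gaze_azimuth_deg,
                          d["elevation_deg"] - gaze_elevation_deg)
    return min(displays, key=angular_distance)

# Example (illustrative geometry): a HUD straight ahead, a head-down display 35 deg below.
displays = [{"name": "HUD", "azimuth_deg": 0.0, "elevation_deg": 0.0},
            {"name": "head-down", "azimuth_deg": -10.0, "elevation_deg": -35.0}]
# select_display(displays, gaze_azimuth_deg=-5.0, gaze_elevation_deg=-30.0) -> head-down display
```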
In one embodiment, the physiological information comprises one or more parameters among the heart rate, the variability of the heart rate, the respiratory rate, the movements of the eyes, the gaze fixations, the pupillary dilation, the cortisol level, the skin temperature, the skin conductivity, one or more markers of the activity of the parasympathetic system, an electrocardiography-type signal, an electroencephalography-type signal, a magnetoencephalography-type signal, an fNIR-type signal or an fMRI-type signal.
The level of cognitive load can be determined based on one or more physiological parameters of the user or pilot (and/or the dynamics of these parameters), measured physically and/or estimated logically, directly or indirectly. The determination of the physiological state of the pilot may include direct and/or indirect measurements. Direct measurements may in particular include one or more direct measurements of the heart rate and/or ECG (electrocardiogram) and/or EEG (electroencephalogram) and/or of the pilot's perspiration and/or respiratory rhythm. Indirect measurements may include, in particular, estimates of pilot excitement or fatigue or stress, which states may in particular be correlated with flight phases or other parameters.
The physiological parameters manipulated can include (in no particular order): gaze monitoring, including monitoring of eye movements and/or gaze fixations ("Nearest Neighbor Index" or NNI); the cortisol level recovered from saliva, for example ("Hypothalamus Pituitary Adrenal" axis or HPA); the heart rate; the variability of this heart rate ("Heart Rate Variability", acronym HRV); one or more markers of the activity of the parasympathetic system; the respiratory rate; the skin temperature; the sweating level; the skin conductivity (galvanic skin response or GSR); the pupil dilation (pupillometry or "Index of Cognitive Activity" (ICA)); an ECG signal (electrocardiography); an EEG signal (electroencephalography); an MEG signal (magnetoencephalography); an fNIR signal ("functional near-infrared imaging") or an fMRI signal ("functional magnetic resonance imaging").
In one embodiment, the "intrinsic" or "physiological" mental or cognitive load level is that which results from the aggregation of physiological data measured physically and directly on the pilot. These physiological values can be weighted together, so as to define a score between predefined limits (for example between 0 and 100), possibly customizable by pilot.
This level of cognitive load can result from physiological measures: it can "internalize" all of the internal cognitive constraints as well as the external events which are processed by the pilot's cognition. The cognitive load level as defined is therefore necessary and sufficient to participate in the regulation of the display according to the invention.
In some optional embodiments, the level of cognitive load is contextualized, due to external or "extrinsic" factors, such as the type of task performed (takeoff, cruise, diversion, flight plan revision, ...) or the flight context (noise, light, vibrations; in a simulator, a drone remote-control cabin, an A380, a small plane, etc.). These factors, considered in isolation, can advantageously put into perspective the cognitive load of the pilot at a given time, i.e. they can make it possible to determine its dynamics (e.g. likely evolution, historized trends, etc.). Advantageously, knowledge of the work context makes it possible to anticipate, at least in a probabilistic manner, the evolution of the cognitive load of the pilot, and consequently the adaptations of the display intended for him.
In one embodiment, the level of cognitive load is further determined as a function of environmental parameters, including the flight context. Different tests (i.e. active measures, in addition to passive observation) can be carried out to evaluate the cognitive and/or physiological/biological data of the pilot. For example, the pilot's cognitive load can be assessed by analyzing his current behavior on the current piloting task (predefined criteria) and/or by additional tests (opportunistic, incidental, etc.) offered optionally during the interstitial time intervals during which it is determined that the pilot can accept such tests. These evaluations can in turn lead to an adaptation of the display.
In one embodiment, the monitoring of the gaze determines, as a function of predefined fixation durations and/or predefined eye movements, actions comprising enlarging the message, reducing the message, sending data or sending a piloting command.
In one embodiment, the method further comprises a step consisting in acquiring at least one image of the human-machine interface.
In one embodiment, the geometry of the cockpit is known (number, types and inclinations of the screens, etc.), by manual configuration or by means of a configuration file.
In a particular embodiment, the geometry of the cockpit can be known semi-automatically, or even fully automatically. For example, a "feedback" loop (for example in the form of a camera capturing the subjective visual point of view of the pilot) makes it possible to detect the number, type and configuration of each of the screens present (e.g. by clipping, etc.). The step of acquiring a (global) image of the human-machine interface is advantageous in that it allows automatic configuration of the display.
In one embodiment, the method further comprises a step of measuring the visual density of the human-machine interface, and the step of adjusting the display of the human-machine interface is performed as a function of the measured visual density.
In one embodiment, the method comprises a step consisting in deactivating the display of the man-machine interface after the user looks at a predefined location for a duration in excess of a predefined duration ("disengageable" graphic adjustments).
In one embodiment, a system is described comprising means for implementing one or more of the steps of the method, the man-machine interface comprising - a plurality of displays chosen from a head-up display 120, a head-down display 130, a portable or transportable screen 140, a virtual and/or augmented reality display 150, one or more projectors 160, and a camera or image acquisition device 170;
- a device for monitoring the gaze of the user of the man-machine interface.
In one embodiment, the system further includes an augmented reality and / or virtual reality headset.
In one embodiment, the system further comprises a regulation feedback loop, in the presence of a camera, for the acquisition of an at least approximate image of the user's subjective view of the man-machine interface.
In one embodiment, the display is characterized by the application of predefined display rules, which are a function of i) the overall state of the pilot 310 (including a weighting between a) cognitive load, b) stress level, c) the flight phase activity in progress, and external environmental parameters internalized in the cognitive sense) and/or ii) the direction of gaze 320.
There are several ways to adjust display 330. In one embodiment, the display is modified (direct implication). In one embodiment, the rules governing the management of the display are affected (indirect implication via a change in the regulation rules).
In a development, the display in the human-machine interface is governed by predefined rules, these rules comprising display location rules and display priority rules. The mapping of these man-machine interfaces is defined according to the actual implementation configuration (simulator, type of aircraft, type of mission). For example, a user (or an instructor or an airline) can manually predefine the different areas of space in which to display this or that type of information. All or part of the screens and associated human-machine interfaces can be transposed or relocated inside a virtual or augmented reality space (if applicable). Rules for the substitution of images or image streams are possible. Certain rules may be associated with, or provide for, different display priorities, minimum and maximum display durations, permanent, intermittent, regular or irregular displays, optional and replaceable displays, non-deactivatable displays, and display modalities or parameters (luminance, surface, texture, etc.).
In a development, the location rules are predefined. In one embodiment, the man-machine interfaces can be configured in and for a specific cockpit by the instructor according to his personal preferences. In one embodiment, this configuration can be entirely manual. For example, the instructor can reserve areas in the helmet display for instructional windows that do not interfere with the students' vision. These windows can be virtually attached to the cockpit: for example, the instructor can configure the content of these windows (aircraft parameters, parameters relating to pilots, performance monitoring, etc.). The definition of the virtual space can therefore be associated with the geometric characteristics of the cockpit or of the flight simulator.
In general, the display of one or more symbols can be optimized (i.e. adapted, for example, to the current revision and/or to the flight context). Specifically, the selected interaction model (translated by the display of corresponding graphic symbols) can be distributed over the different screens in an optimized way (e.g. distribution or spatial allocation of information over the different screens available and/or accessible). For example, in spatial terms, the display can be distributed or split among several display devices, if necessary. For example, optionally, the method can shift or move the entire display graphically, for example during input, to allow the substitution model to be displayed at the limits of this display area.
The display of the value can for example be carried out at different places in the pilot's visual field, for example near the input means (finger, cursor) or at other places in the cockpit (head-up projection, augmented reality overlay, 3D rendering, etc.). In temporal terms, the graphic symbol may include display sequences ordered in various ways.
In one embodiment, feedback (for example from keyboard input) can be displayed in the peripheral field. In one embodiment, the display of the feedback is conditioned on the fact that the user's gaze is fixed and / or is directed towards a certain predefined area.
According to this embodiment, on the contrary, the feedback remains in the peripheral zone if the path of the gaze does not meet predefined conditions (e.g. according to time and / or space criteria). In one embodiment, the feedback can "follow" the gaze, so as to remain in a state in which it can be requested, but without obstructing the current location of the gaze.
To this end, the visual field can be "discretized" into different zones. Qualitatively, the zones can be qualified (for example) as a frontal obstruction zone, or zones of strong, moderate, weak or null attention attraction, etc. Quantitatively, these zones can be determined in a quantified or objective manner (perimeters of space, precise distances with tolerances, confidence intervals, etc.). The discretized areas of the visual field can be determined "universally" (for a group of users), or in a personalized and individual manner.
In one embodiment, the visual density 341 is adjusted and / or the display in the peripheral field of vision 342 is adjusted.
"Visual density" can be manipulated. Visual density can be measured in number of lit or active pixels per square centimeter, and / or in number of alphanumeric characters per unit of area and / or in number of predefined geometric patterns per unit of area. Visual density can also be defined, at least partially, according to physiological criteria (model of speed of reading by the pilot, etc.).
In one embodiment of the invention, this visual density can be kept substantially constant. In other embodiments, the visual density is adjusted. The flight context, for example, can modulate this visual density (for example, on landing or in the critical phases of the flight, the information density can be deliberately reduced or, conversely, a maximum of information can be displayed).
FIG. 4 details a particular embodiment concerning the management of the peripheral field of vision.
The cognitive load of the pilot 420 is determined from sensors 410.
One or more parameters can condition the selection 430 of one or more displays (120, 130, 140, 150), possibly with knowledge of the geometry of the cockpit 431 or of the available VR/AR equipment.
In one embodiment, it is the direction of the gaze that conditions the selection of the display, and not the cognitive load. Cognitive load, on the other hand, conditions the distance between the display of messages and the center of the vision cone.
In one embodiment, the cognitive load alone conditions the selection of the display. For example, in certain saturation or emergency situations, most of the screens can be switched off and only one screen can be used (regardless of the direction of gaze).
In one embodiment, the two factors intervene (in equal parts or according to different weights or compromises). The direction of the gaze and the cognitive load directly or indirectly influence the selection of the display (one or more displays).
As a result, a peripheral vision message manager 440 in interaction with the avionic systems 441 displays one or more messages on the selected displays or display systems (120, 130, 140, 150, 160).
The sensors 410 are direct or indirect sensors, in particular physiological sensors, measuring physical or chemical parameters. The sensors 410 can in particular include eye-tracking devices (e.g. eye positions, movements, gaze direction, pupil dilation, detection of blinks and their frequency, etc.), devices for measuring physical parameters of the environment (e.g. ambient brightness, hygrometry, sound level, exhaled CO2, etc.), and physiological measurement devices (e.g. heart rate; respiratory rate; EEG; ECG; sweating level, e.g. of the neck or hand; eye humidity; movement of the pilot's body, head, hands, feet, etc.).
The set of raw signals is centralized and analyzed in a logic module called the "cognitive load analyzer" 420, which interprets and categorizes all of the measured parameters into predefined categories, in particular according to discretized "cognitive load" levels (e.g. "maximum cognitive load", "rapidly growing cognitive load", "zero cognitive load (sleep)", etc.).
The discretization of the states of cognitive load can be modulated or attenuated or nuanced by many other optional additional parameters, for example according to stress levels that are also quantified (e.g. "stress absent" or "maximum stress (landing)"). An intense cognitive load can thus occur in the absence of stress or, on the contrary, in a state of maximum stress. In addition, the flight context can promote or inhibit stress and/or cognitive load (which differs according to the flight phases: takeoff, climb, cruise, diversion, revision of the flight plan, landing, taxiing, etc.).
The different categories thus obtained can each be associated with rules for managing the display of messages 440.
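By way of illustration, this association can be implemented as a simple lookup from a discretized (cognitive load, stress, flight phase) triplet to message-display rules. The rule fields and values below are assumptions chosen to illustrate the idea, not rules defined by the patent.

```python
# Hypothetical sketch: map discretized pilot-state categories to message-display rules
# used by the peripheral-vision message manager. All keys and values are illustrative.
DISPLAY_RULES = {
    ("maximum", "high", "landing"): {"max_messages": 1, "start_distance_deg": 10, "vibrate": True},
    ("maximum", "low", "cruise"):   {"max_messages": 2, "start_distance_deg": 20, "vibrate": True},
    ("low", "low", "cruise"):       {"max_messages": 5, "start_distance_deg": 40, "vibrate": False},
}

def rules_for(load_level: str, stress_level: str, flight_phase: str) -> dict:
    # Fall back to a conservative default when no specific rule is defined for the triplet.
    return DISPLAY_RULES.get((load_level, stress_level, flight_phase),
                             {"max_messages": 3, "start_distance_deg": 30, "vibrate": True})
```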
The management of the display can be carried out according to several objectives, in particular attention criteria. The human brain, although correctly perceiving visual signals (short of the appearance of binocular rivalry), can in fact "mask" pieces of information in certain circumstances. For example, a driver who is completely focused on an overtaking maneuver may ignore a speed warning light, although his visual system has perfectly decoded and fused the visual stimuli. A pilot focused on an object on the horizon can abstract away objects in the foreground, which may be dangerous. This attentional rivalry, being exclusive and specific to each brain, cannot be measured or inferred ("black box"). On the other hand, reminders (and variations of them, to avoid habituation) can be judiciously implemented, in order ultimately to support the attention of the user (who is by hypothesis placed in a situation of risk or decision, justifying keeping him in a particular state of awareness or consciousness).
According to the embodiments of the invention, all or part of the available displays will be used.
In a particular embodiment, a single screen is selected, depending on the direction of gaze. Advantageously, the selection of a single screen (for example the one at the shortest distance from the current location of the gaze) allows rapid and natural interaction (in the sense that it minimizes disturbances or visual changes). One of the remarkable advantages of the invention resides in the fact of "abstracting away" from the displays and following the gaze so that the pilot can interact quickly.
In a particular embodiment, one or more screens ("displays") are selected, depending on the cognitive or mental load determined and/or the direction of gaze.
The selection of displays 430 can be carried out in various ways.
In one embodiment, the selection of displays can in particular take advantage of knowledge of the geometry of the cockpit (the surfaces of the different screens, their shapes, their positions in space, which screens are most frequently consulted by the pilot, where the foveal zone is, etc.). Since this configuration changes little, this initial configuration solution may be sufficient and satisfactory.
In one embodiment, this knowledge of the geometry can be determined or achieved by the use of an image capture in subjective view (for example, a video camera mounted on the helmet worn by the pilot can make it possible to approximate what he perceives of his environment, which is arranged with multiple screens). As a result, the desired display region can be mapped to a particular screen (addressing). This development provides flexibility (and does not necessarily require manual calibration).
In a particular embodiment, the display of the man-machine interface can take into account the position of the operator and / or the direction of gaze, in particular so as to always be presented in accordance with the geometry of the cabin or according to an optimized subjective view.
In a particular embodiment, the existence of a video feedback loop can be particularly advantageous, in that it can make it possible, in particular coupled with the use of projectors, to redefine the visual environment of the user dynamically, for example by adapting it to the environment (using automated or automatic methods).
The combined use of the gaze tracking device and the man-machine interface according to the invention is particularly advantageous, in that it makes it possible to modulate or regulate or influence the location rules and/or the display priority rules.
In certain advantageous embodiments (e.g. in the case of cognitive tunneling), the screens that are too distant are turned off. Variants provide for a gradual decrease in visual density (to minimize disturbance to the pilot).
As regards the management of messages in peripheral vision 440, numerous embodiments are possible.
FIG. 5 illustrates an example of management of the display in the peripheral field of vision.
Knowing the overall state of the pilot (internal cognition) and the external parameters (flight phase, priority of a message), the method according to the invention comprises a step consisting in receiving a message from the avionic systems (e.g. meteorological alert, flight instruction, ATC instruction, diversion, etc.). This message can be associated with a variable priority or criticality level.
The process can arbitrate the display, depending on factors including the pilot’s cognitive state (mental load) and the direction of his gaze.
A specific, non-limiting embodiment is described below. In step 510, the pilot's gaze is determined to be focused at a given location in the man-machine interface. In one embodiment, a message with a priority in excess of a predefined threshold is sent by the avionics services: this message requires the attention of the pilot in the short term. In another embodiment, the pilot's state of consciousness requires maintaining or raising his attention: a specific message must stimulate the pilot.
Consequently, in step 520, a message is displayed on the periphery. In an advantageous embodiment, this message vibrates. In one embodiment, the message vibrates according to modalities which are functions of the predefined criticality associated with the message (the form depends on the substance). Vibrations (i.e. movements of small amplitude) in peripheral vision are particularly advantageous for the reasons mentioned above (properties of peripheral vision).
In step 530, the pilot continues his activity, whether or not he has perceived (visually) the message displayed on the periphery: his gaze moves. The message, instead of remaining in place, moves correlatively to the movement of the gaze (in different ways: proportional, homothetic, etc.). This step 530 gradually draws the attention of the pilot.
In step 540, the pilot is still looking elsewhere, i.e. still has not consulted the message. The management of the display then intentionally brings the display of the symbol/icon closer to the active location of the pilot's gaze. In one embodiment, the spatial distance between the vibrating message and the location of the gaze may in particular be a function of the pilot's cognitive load. In one embodiment, the message approaches foveal vision when a predefined mental load level has been reached. In one embodiment, the distance can be a function of this cognitive load and of the criticality of the message. The distance-governing function can be a mathematical (analytical) function but can also be of an algorithmic nature (the result of a series of steps, which cannot be formulated analytically). The function can for example decrease the distance over time. Depending on the case, the reduction can be carried out at constant speed, or can accelerate. One possible form of such a function is sketched below.
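The sketch below is only one possible form of such a distance-governing function, decreasing over time and shrinking with cognitive load and criticality; the constants and the lower bound are assumptions made for illustration.

```python
# Hypothetical sketch of a distance-governing function: the message starts further out when
# load and criticality are low, approaches the gaze center over time, and approaches faster
# for critical messages. All constants are illustrative assumptions.
def message_distance_deg(cognitive_load: float, criticality: float, elapsed_s: float) -> float:
    """cognitive_load and criticality in [0, 1]; returns angular distance from the gaze center."""
    start_deg = 40.0 * (1.0 - 0.5 * cognitive_load) * (1.0 - 0.5 * criticality)  # initial offset
    approach_rate = 1.0 + 4.0 * criticality        # degrees per second
    return max(8.0, start_deg - approach_rate * elapsed_s)   # stays in the near periphery
```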
In step 550, the pilot becomes aware of the message, looks at it and "sees" it. At this moment, the vibrations of the message cease. In other words, if the gaze is directed towards the message, the latter freezes (the vibrations disappear) to allow the opening of the message and its processing.
Different embodiments are possible.
Several methods exist to determine that a graphic object (such as a message) has been "seen", and more generally to control the human-machine interface (selection of a displayed object). These modalities can be combined.
In one embodiment, it is determined whether the path of the pilot's gaze (i.e. of its projection on the plane or in the space of the man-machine interface) crosses a predefined zone (for example around a particular object of the interface). The zone can be defined strictly, i.e. according to strict contours, but also according to tolerance margins (e.g. progressive buffer zones, etc.). These margins can be associated with measurement errors in eye tracking. One or more points of intersection can be determined between one or more zones of the human-machine interface and the path of the gaze (or simply its location) at a given time. These intersection points can in turn trigger one or more actions (e.g. confirmation of sending a response, another display, etc.). In one embodiment, the duration during which the pilot's gaze crosses a predefined zone associated with a message is a parameter which can be manipulated by the method according to the invention.
In one embodiment, the ocular path or path of the gaze crosses ("intersects", "passes over") the message; this exclusively spatial condition, necessary and sufficient, determines that the message has been seen. This embodiment has the advantage of not disturbing the "natural" activity of the pilot. In one embodiment, the conditions are both spatial and temporal: a predefined minimum duration is required (the pilot must remain a few fractions of a second on the message for it to be considered seen). In one embodiment, a maximum crossing time is defined (if the gaze lingers over a given area, a different action may be required). In one embodiment, the duration of the crossing must fall within a predefined interval (either absolute, i.e. whatever the message, or relative to the type of message or its content), between a minimum duration and a maximum duration.
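By way of illustration, a dwell-based detection of the "seen" condition with a tolerance margin and a minimum/maximum duration could look like the following sketch; the gaze-sample format, zone representation and thresholds are assumptions.

```python
def message_seen(gaze_samples, zone, margin=0.0, t_min=0.2, t_max=2.0) -> bool:
    """Return True if the gaze path stays inside the (margin-expanded) zone for
    a continuous duration falling within [t_min, t_max] seconds.

    gaze_samples: iterable of (t, x, y) tuples in the interface plane.
    zone: (x0, y0, x1, y1) axis-aligned rectangle around the message."""
    x0, y0, x1, y1 = zone
    x0, y0, x1, y1 = x0 - margin, y0 - margin, x1 + margin, y1 + margin
    entry_t = None
    for t, x, y in gaze_samples:
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside:
            if entry_t is None:
                entry_t = t          # start of the current crossing
            dwell = t - entry_t
            if t_min <= dwell <= t_max:
                return True          # duration within the predefined interval
        else:
            entry_t = None           # gaze left the zone: reset the dwell
    return False
```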
In step 560, the pilot's gaze remains on the message beyond a predetermined time; the message is then "opened". For example, a larger display surface is used, adjacent to the initial symbol.
Several interactions are then possible. In step 561, the pilot can look at the response to be sent and validate it by physical means in order to secure the interaction.
For example, in the aeronautical field, for critical operations such as interactions with air traffic control (ATC), such as sending a datalink-type message, a physical or haptic confirmation may be required (by convention). Similarly, for driving a partially automated land vehicle, certain physical acts may be required (e.g. for overtaking, etc.).
The pilot can also control, by gaze, a preconfigured action in or by the message (e.g. confirmation of an order, selection from a list, closing of a window, confirmation, cancellation, etc.). Alternatively, the pilot can simply ignore the message in step 570, which will then be "reduced" in step 580 (for example to a usual or invariant location in the space defined by the man-machine interface). By reduction, it can be understood that the message is miniaturized (the display area is reduced) or that a symbol representing it is substituted for it. Other methods are possible. Combinations of interactions are possible: for example, after a confirmation 561, the message can also be reduced to the peripheral field.
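The message lifecycle described above (peripheral icon, seen, opened, then confirmed or ignored and reduced) can be summarised as a small state machine; the sketch below is illustrative only, with hypothetical state and event names.

```python
from enum import Enum, auto

class State(Enum):
    PERIPHERAL = auto()   # vibrating icon displayed in peripheral vision
    SEEN = auto()         # gaze crossed the icon: vibration stops
    OPEN = auto()         # dwell exceeded threshold: full message shown
    CONFIRMED = auto()    # action validated (possibly by physical means)
    REDUCED = auto()      # message minimised to its usual location

TRANSITIONS = {
    (State.PERIPHERAL, "gaze_crossed"): State.SEEN,
    (State.SEEN, "dwell_exceeded"): State.OPEN,
    (State.OPEN, "confirm"): State.CONFIRMED,
    (State.OPEN, "ignore"): State.REDUCED,
    (State.CONFIRMED, "reduce"): State.REDUCED,  # e.g. back to the peripheral field
}

def step(state: State, event: str) -> State:
    """Apply an event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```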
In other words, on receiving a request to alert or inform the pilot, the method according to the invention can display a dedicated icon in the peripheral field of the pilot at a distance depending on the pilot's current cognitive load (the higher it is, the closer the icon will be; this position can vary between 10° and 30° for example). In one embodiment, the method according to the invention may comprise a step consisting in selecting the display closest to the horizontal line associated with the current location of the gaze (e.g. a head-down screen, a HUD, the helmet visor, the windshield, etc.). This feature is advantageous ("seamless display"), i.e. it minimizes graphic changes or surprise effects. The selection of the display can for example be carried out according to a database holding information on the position of the displays in the cockpit (and their shape, for example). The symbol or icon displayed may vibrate in order to capture the pilot's attention. A vibration or oscillation is advantageous because humans are more sensitive to movement than to shapes or colors in peripheral vision. In one embodiment, the icon can "follow" the pilot's gaze (e.g. mimetic or proportional or homothetic displacements). The follow-up can take place until a predefined duration has passed and/or until the pilot has viewed it (for example according to the level of criticality of the information). Once "viewed", a more concrete or extended or rich or detailed interactive message may appear. The pilot can process the message later; if necessary, the message can be placed in the position which is initially reserved for it. In the event that the pilot actually looks at the message, he can read the information and choose to activate certain functions contained in the message, for example by looking at buttons dedicated to these actions (i.e. by gaze). The human-machine interface according to the invention can be controlled by the gaze and/or by physical input interfaces (e.g. buzzer, mouse, touchscreen, voice command): the interface can be exclusively controllable by the gaze, or exclusively controllable by physical means, or even by a combination of the two types of control (i.e. gaze and physical action).
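As a sketch of the "seamless display" selection described above, the following illustrative code picks, from an assumed database of display positions, the display closest to the horizontal line of the current gaze; all names and the angular representation are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CockpitDisplay:
    name: str              # e.g. "HUD", "head-down left", "helmet visor"
    elevation_deg: float   # vertical angular position of the display centre
    azimuth_deg: float     # horizontal angular position of the display centre

def select_display(displays, gaze_elevation_deg, gaze_azimuth_deg):
    """Choose the display minimising the elevation gap to the gaze (ties broken
    by azimuth), so the notification appears with minimal visual disruption."""
    return min(displays,
               key=lambda d: (abs(d.elevation_deg - gaze_elevation_deg),
                              abs(d.azimuth_deg - gaze_azimuth_deg)))
```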
A computer program product is described, said computer program comprising code instructions making it possible to carry out one or more of the steps of the method, when said program is executed on a computer.
By way of example of a hardware architecture suitable for implementing the invention, a device may include a communication bus to which are connected a central processing unit or microprocessor (CPU, acronym for "Central Processing Unit"), which processor may be multi-core or many-core; a read-only memory (ROM, acronym for "Read Only Memory") which may include the programs necessary for implementing the invention; a random access memory or cache memory (RAM, acronym for "Random Access Memory") comprising registers suitable for recording variables and parameters created and modified during the execution of the aforementioned programs; and a communication or I/O interface (I/O, acronym for "Input/Output") adapted to transmit and receive data. In the case where the invention is implemented on a reprogrammable computing machine (for example an FPGA circuit), the corresponding program (i.e. the sequence of instructions) can be stored in or on a removable storage medium (for example an SD card, or mass storage such as a hard disk, e.g. an SSD) or a non-removable one, volatile or non-volatile, this storage medium being partially or completely readable by a computer or a processor. The reference to a computer program which, when executed, performs any of the functions described above, is not limited to an application program running on a single host computer. On the contrary, the terms computer program and software are used here in a general sense to refer to any type of computer code (for example, application software, firmware, microcode, or any other form of computer instruction, such as web services or SOA or via API programming interfaces) which can be used to program one or more processors to implement aspects of the techniques described here. Computing means or resources can in particular be distributed (cloud computing), possibly with or according to peer-to-peer and/or virtualization technologies. The software code can be executed on any suitable processor (for example, a microprocessor) or processor core or a set of processors, whether provided in a single computing device or distributed among several computing devices (for example as possibly accessible in the environment of the device). Security technologies (crypto-processors, possibly biometric authentication, encryption, smart card, etc.) can be used.
Claims (15)
1. A computer-implemented method for managing a human-machine interface comprising a plurality of displays, comprising the steps of:
- receiving information relating to the position of the eyes and the direction of gaze of a user on the man-machine interface;
- receiving physiological information from the user;
- determining a level of cognitive load among a plurality of predefined levels as a function of the physiological information received;
- adjusting the display of the man-machine interface according to the direction of gaze and the level of cognitive load determined.
[2" id="c-fr-0002]
2. The method of claim 1, further comprising a step of displaying, on the periphery of the man-machine interface, a graphic message at a configurable distance from the location of the man-machine interface on which the user's gaze rests.
[3" id="c-fr-0003]
3. Method according to claim 2, the distance being a function of the cognitive load and / or of the priority associated with the message.
[4" id="c-fr-0004]
4. Method according to claim 3, the distance decreasing over time.
[5" id="c-fr-0005]
5. Method according to any one of the preceding claims, the graphic message being displayed according to graphic modalities comprising displacements in translation and / or rotations, such as vibrations, these displacements and / or rotations being a function of the content of the message.
[6" id="c-fr-0006]
6. The method of claim 1, the step of adjusting the display comprising the step of selecting one or more displays from among the displays constituting the man-machine interface.
[7" id="c-fr-0007]
7. The method as claimed in claim 1, the physiological information comprising one or more parameters comprising the heart rate, the variability of the heart rate, the respiratory rate, the movements of the eyes, the gaze fixations, the pupillary dilation, the cortisol level, the skin temperature, the skin conductivity, one or more markers of activity of the parasympathetic system, an electrocardiography signal, an electroencephalography signal, a magnetoencephalography signal, an fNIR signal or an fMRI-type signal.
[8" id="c-fr-0008]
8. The method of claim 1, the cognitive load level being further determined as a function of environment parameters including the flight context.
[9" id="c-fr-0009]
9. Method according to any one of the preceding claims, the monitoring of the gaze determining, as a function of predefined fixation durations and/or predefined eye movements, actions comprising enlarging the message, reducing the message, sending data or a control command.
[10" id="c-fr-0010]
10. Method according to any one of the preceding claims, further comprising a step consisting in acquiring at least one image of the human-machine interface.
[11" id="c-fr-0011]
11. Method according to any one of the preceding claims, further comprising a step consisting in measuring the visual density of the man-machine interface, the step consisting in adjusting the display of the man-machine interface being a function of the measured visual density.
[12" id="c-fr-0012]
12. A computer program product, said computer program comprising code instructions making it possible to carry out the steps of the method according to any one of claims 1 to 11, when said program is executed on a computer.
[13" id="c-fr-0013]
13. System comprising means for implementing the steps of the method according to any one of claims 1 to 11, the man-machine interface comprising
- a plurality of displays chosen from a head-up display 120, a head-down display 130, a portable or transportable screen 140, a virtual and/or augmented reality display 150, one or more projectors 160, a camera or image acquisition device 170;
- a device for monitoring the gaze of the user of the man-machine interface.
[14" id="c-fr-0014]
14. The system of claim 13, further comprising an augmented reality and / or virtual reality headset.
[15" id="c-fr-0015]
15. System according to any one of claims 13 to 14, further comprising a regulation feedback loop in the presence of a camera for acquiring an at least approximate image of the user's subjective view of the human-machine interface.